(PART*) Module overview

Module Introduction

Placeholder

Welcome

Moodle

Module overview

Troubleshooting

Acknowledgements

(PART*) Foundational Concepts

1 Geocomputation: An Introduction

Placeholder

1.1 Lecture recording

1.2 Reading list

1.3 Getting started

1.4 Software

1.4.1 QGIS Installation

1.4.2 R and RStudio Installation

1.4.3 UCL Desktop and RStudio Server

1.4.4 A note on ArcGIS

1.5 File management

1.6 Before you leave

2 GIScience and GIS software

Placeholder

2.1 Lecture recording

2.2 Reading list

2.3 Simple digitisation of spatial features

Questions

2.4 Population change in London

2.4.1 Setting the scene

2.4.2 Finding our data sets

2.4.3 Downloading and processing

2.4.3.1 Administrative Geography Boundaries

2.4.3.2 Population data

2.4.3.3 Cleaning our population data sets

2.4.4 Using QGIS to map our population data

2.4.4.1 Setting up the project

2.4.4.2 Adding layers

2.4.4.3 Conducting an attribute join

2.4.5 Exporting map for visual analysis

2.5 Assignment

2.6 Before you leave

3 Cartography and Visualisation

Placeholder

3.1 Lecture recording

3.2 Reading list

3.3 Crime in London

3.3.1 Finding our data sets

Crime data

Population data

3.3.2 Downloading and processing

Borough population

Ward population

Crime data

File download

3.3.3 Using QGIS to map our crime data

3.3.3.1 Setting up the project

3.3.3.2 Adding layers

3.3.3.3 Counting points-in-polygons

Tips

3.3.3.4 Mapping our crime data

3.4 Assignment

Tips

3.5 Before you leave

4 Programming for Data Analysis

Placeholder

4.1 Lecture recording

4.2 Reading list

4.3 Programming

4.4 Programming in R

4.4.1 The RStudio interface

4.5 RStudio console

4.5.1 Command Input

4.5.2 Storing variables

4.5.3 Accessing and returning variables

4.5.4 Variables of different data types

4.5.5 Calling functions on our variables

4.5.6 Returning functions

4.5.7 Examining our variables using functions

4.5.8 Creating a two-element object

4.6 Simple analysis

4.6.1 Housekeeping

4.6.2 Atomic vectors

4.6.3 Matrices

4.6.4 Dataframes

4.6.5 Column names

4.6.6 Adding columns

4.7 Crime analysis I

4.7.1 Starting a project

4.7.2 Setting up a script

4.7.3 Running a script

To run line-by-line

To run the whole script

Stopping a script from running

4.7.4 Crime data

4.7.4.1 Reading data into R

4.7.4.2 Inspecting data in R

4.8 Assignment

4.9 Before you leave

5 Programming for Spatial Analysis

Placeholder

5.1 Lecture recording

5.2 Reading list

5.3 Crime analysis II

5.3.1 Data preparation

5.3.2 Spatial analysis set up

5.3.3 Interacting with spatial data

5.3.4 Getting our crime data in shape

5.3.4.1 Attribute join

5.3.5 Data wrangling

5.3.5.1 Selection with base R

5.3.5.2 Selection with dplyr

5.3.6 Improving your workflow

5.3.7 Aggregate crime by ward

5.3.8 Joining crime data to wards

5.3.9 Mapping crime data

Question

5.3.10 Styling crime data

5.3.11 Exporting our crime data

5.4 Assignment

5.5 Before you leave

(PART*) Core Spatial Analysis

6 Analysing Spatial Patterns I: Geometric Operations and Spatial Queries

Placeholder

6.1 Lecture recording

6.2 Reading list

6.3 Bike theft in London

6.3.1 Housekeeping

6.3.2 Loading data

File download

6.3.3 Data preparation

6.3.4 Spatial operations I

6.3.4.1 Geometric operations

6.3.4.2 Spatial queries

6.3.4.3 File export

6.3.5 Spatial operations II

6.3.5.1 Geometric operations

6.3.5.2 Spatial queries

6.3.5.3 Theft at train and tube locations?

6.4 Assignment

6.5 Before you leave

7 Analysing Spatial Patterns II: Spatial Autocorrelation

Placeholder

7.1 Lecture recording

7.2 Reading list

7.3 Childhood obesity

7.3.1 Housekeeping

7.3.2 Loading data

7.3.3 Data preparation

7.4 Statistical distributions

7.5 Assignment 1

7.6 Spatial distributions

7.6.1 Spatial lag

7.6.2 Defining neighbours

7.6.3 Spatial weights matrix

7.6.4 Global Moran’s I

7.6.5 Geary’s C

7.6.6 Getis-Ord General G

7.6.7 Local Moran’s I

7.6.8 Getis-Ord Gi*

7.7 Assignment 2

7.8 Before you leave

8 Analysing Spatial Patterns III: Point Pattern Analysis

This week, we will look at how Point Pattern Analysis (PPA) can be used to detect and delineate clusters within point data. In point pattern analysis, we look for clusters or patterns across a set of points, measuring the density, dispersion and homogeneity of their structure. There are several approaches to calculating and detecting these clusters, which are explained in our main lecture. We then deploy several PPA techniques, including Kernel Density Estimation, on our bike theft data to continue our investigation from Week 06.
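
As a quick taste of what these techniques look like in practice, here is a minimal, self-contained sketch of Kernel Density Estimation using the spatstat library on simulated points. The data and object names (pts, kde) are illustrative only; we apply KDE to our real bike theft data later in the practical.

```r
# a minimal KDE sketch with spatstat, using simulated points
library(spatstat)

set.seed(42)
# create a point pattern ("ppp" object) of 200 random points in a unit square
pts <- ppp(x = runif(200), y = runif(200), window = owin(c(0, 1), c(0, 1)))

# smooth the points into a continuous density surface;
# sigma is the kernel bandwidth, which controls the degree of smoothing
kde <- density(pts, sigma = 0.1)

# the result is a pixel image ("im" object) we can plot as a heat map
plot(kde, main = "KDE of simulated points")
```

Choosing sigma is the key decision in KDE: a small bandwidth produces a spiky surface of local peaks, while a large one smooths everything into one broad hotspot.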

8.1 Lecture recording

  • Lecture W8

8.2 Reading list

  • Reading #1
  • Reading #2

8.3 Bike theft in London

This week, we continue to investigate bike theft in London in 2019, looking to confirm our simple hypothesis: that bike theft primarily occurs near tube and train stations. Instead of measuring the distance of individual bike thefts from train stations, we will analyse the distribution of theft clusters in relation to the stations. We will first examine this visually, and then compare these clusters to the locations of train and tube stations quantitatively using geometric operations.

To complete this analysis, we will again use the following data sets:

  • Bike theft in London for 2019 from data.police.uk
  • Train and Tube Stations from Transport for London

8.3.1 Housekeeping

Let’s get ourselves ready to start our lecture and practical content by first downloading the relevant data and loading it within our script.

Open a new script within your GEOG0030 project and save this script as wk8-bike-theft-PPA.r. At the top of your script, add the following metadata (substitute accordingly):

# Analysing bike theft and its relation to stations using point pattern analysis
# Date: January 2021
# Author: Justin

All of the geometric operations and spatial queries we will use are contained within the sf library. For our Point Pattern Analysis, we will use the spatstat library (“spatial statistics”), which contains the different PPA techniques we need for this practical. We will also need the raster library, which provides classes and functions to manipulate geographic (spatial) data in ‘raster’ format; we will use this package only briefly today, but look into it in more detail next week. We’ll also be using the rosm library (“R OSM”), which accesses and plots OpenStreetMap and Bing Maps tiles to create high-resolution basemaps. Lastly, you will also need to install the dbscan and leaflet libraries.

# libraries
library(tidyverse)
library(sf)
library(tmap)
library(janitor)
library(spdep)
library(RColorBrewer)
library(raster)
library(spatstat)
library(rosm)
library(dbscan)
library(leaflet)
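
The dbscan library we load above implements DBSCAN, a density-based method for detecting point clusters. As a minimal, illustrative sketch of what it does (simulated coordinates; the object names xy and db are mine, not from the practical):

```r
library(dbscan)

set.seed(1)
# two tight clusters of 50 points each, plus 20 scattered "noise" points
xy <- rbind(
  cbind(rnorm(50, 0, 0.05), rnorm(50, 0, 0.05)),
  cbind(rnorm(50, 1, 0.05), rnorm(50, 1, 0.05)),
  cbind(runif(20), runif(20))
)

# DBSCAN groups points with at least minPts neighbours within distance eps
# into clusters; points assigned cluster 0 are treated as noise
db <- dbscan(xy, eps = 0.15, minPts = 5)

# one cluster label per input point
table(db$cluster)
```

Unlike KDE, which produces a continuous density surface, DBSCAN assigns each point a discrete cluster label, which makes it easy to compare cluster membership against other features such as station locations.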

8.3.2 Loading data

This week, we will continue to use our data from Week 06. This includes:

  • Our 2018 London Ward boundaries
  • Our train and tube stations exported from OpenStreetMap
  • Our bike theft data for London in 2019

You should load these data sets as new variables in this week’s script. You should already have the original data files for both the London Wards and the 2019 crime data in your raw data folder.

Note
If you did not export your OpenStreetMap train and tube stations during our practical in Week 06, you will need to re-run parts of your code to download and then export the OpenStreetMap data. If this is the case, open your wk6-bike-theft-analysis.r script and make sure you create a shapefile of the train and tube stations.

Let’s go ahead and load all of our data at once. We did our due diligence in Week 06 and know what our data looks like and what CRS it is in, so we can use pipes to make loading our data more efficient:

# read in our 2018 London Ward boundaries
london_ward_shp <- read_sf("data/raw/boundaries/2018/London_Ward.shp")

# read in our OSM tube and train stations data
london_stations_osm <- read_sf("data/raw/transport/osm_stations.shp")

# read in our crime data csv from our raw data folder
bike_theft_2019 <- read_csv("data/raw/crime/crime_all_2019_london.csv") %>% 
  # clean names with janitor
  clean_names() %>%
  # filter according to crime type and ensure we have no NAs in our data set
  filter(crime_type == "Bicycle theft" & !is.na(longitude) & !is.na(latitude)) %>% 
  # select just the longitude and latitude columns
  dplyr::select(longitude, latitude) %>%
  # transform into a point spatial dataframe
  # note providing the columns as the coordinates to use
  # plus the CRS, which as our columns are long/lat is WGS84 (EPSG:4326)
  st_as_sf(coords = c("longitude", "latitude"), crs = 4326) %>% 
  # convert into BNG
  st_transform(27700) %>% 
  # clip to London
  st_intersection(london_ward_shp)
## Warning: attribute variables are assumed to be spatially constant throughout all
## geometries

Let’s create a quick map of our data to check it loaded correctly:

# plot our London Wards first
tm_shape(london_ward_shp) + tm_fill() + 
  # then add bike crime as blue
  tm_shape(bike_theft_2019) + tm_dots(col = "blue") + 
  # then add our stations as red
  tm_shape(london_stations_osm) + tm_dots(col = "red") + 
  # then add a north arrow
  tm_compass(type = "arrow", position = c("right", "bottom")) + 
  # then add a scale bar
  tm_scale_bar(breaks = c(0, 5, 10, 15, 20), position = c("left", "bottom"))
## Warning: In view mode, scale bar breaks are ignored.

Great - that looks familiar! This means we can move forward with our data analysis and theoretical content for this week.

8.4 Assignment

8.5 Before you leave

(PART*) Advanced Spatial Analysis

9 Rasters, Zonal Statistics and Interpolation

9.1 Lecture recording

  • Lecture W9

9.2 Reading list

  • Reading #1
  • Reading #2

9.3 Assignment

9.4 Before you leave

10 Network Analysis

10.1 Lecture recording

  • Lecture W10

10.2 Reading list

  • Reading #1
  • Reading #2

10.3 Assignment

10.4 Before you leave